
Conversation

@angelayi
Contributor

@angelayi angelayi commented Nov 6, 2025

This PR adds support for effectful ops within invoke_subgraphs.

  • Most of the logic is in `invoke_subgraph.py_functionalize_impl`.
    • In the functionalization metadata-collection phase, we record the tokens before descending further into the dispatcher and again after returning from it. If the invoke_subgraph subgraph contains effectful nodes, either the number of effects or the token used for an effect will have changed.
    • We store this effect difference in the `InvokeSubgraphCache`, keyed by the subgraph identifier, with the effect as the value. For now we only support one effect within a subgraph.
    • During the tracing part of AOTAutograd, we then wrap the subgraph so that it takes in and outputs a token (see the conceptual sketch after this list).

Before:

```
def forward(self, x):
    repeated_subgraph0 = self.repeated_subgraph0
    invoke_subgraph = torch.ops.higher_order.invoke_subgraph(repeated_subgraph0, 'subgraph_0', x)
    return invoke_subgraph

def repeated_subgraph(self, x):
    record_memory = torch.ops.mylib.record_memory.default("forward", "N")
    add = torch.ops.aten.add(x, x)
    return add
```

After:

```
def forward(self, token, x):
    repeated_subgraph0 = self.repeated_subgraph0
    invoke_subgraph = torch.ops.higher_order.invoke_subgraph(repeated_subgraph0, 'subgraph_0', token, x)
    getitem = invoke_subgraph[0]  # output token
    getitem_1 = invoke_subgraph[1]
    return (getitem, getitem_1)

def repeated_subgraph(self, token, x):
    with_effects = torch.ops.higher_order.with_effects(token, torch.ops.mylib.record_memory.default, 'forward', 'N')
    getitem = with_effects[0]  # output token
    add = torch.ops.aten.add(x, x)
    return (getitem, add)
```
  • Then `_remove_effect_tokens` contains additional logic to handle removing the effects from the invoke_subgraph subgraph.
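
For intuition, here is a minimal pure-Python sketch of the token-threading contract that the wrapped subgraph follows. It is only an analogy (the function bodies and the stand-in effect are hypothetical), not the actual `with_effects` or `invoke_subgraph` implementation:

```
# Conceptual sketch only: a token "threads" an effectful op so that later
# effects are ordered after earlier ones. Not the real
# torch.ops.higher_order.with_effects implementation.
def with_effects(token, op, *args, **kwargs):
    result = op(*args, **kwargs)  # run the effectful op
    new_token = object()          # fresh token: anything consuming it depends on this effect
    return new_token, result

def repeated_subgraph(token, x):
    # mirrors the "After" graph above: the subgraph consumes and produces a token
    token, _ = with_effects(token, print, "record_memory: forward N")  # stand-in effect
    return token, x + x

def forward(x):
    token = object()  # initial token created at the outer graph boundary
    token, out = repeated_subgraph(token, x)
    return out
```

Threading the token through the subgraph's inputs and outputs is what lets the tracer preserve the relative order of effects across subgraph boundaries without having to treat the effectful op as producing real data.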

Stack from ghstack (oldest at bottom):

cc @ezyang @EikanWang @jgong5 @wenzhe-nrv

Differential Revision: D87392741

@pytorch-bot

pytorch-bot bot commented Nov 6, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/167231

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit 91eb786 with merge base 460c7e1:

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

angelayi added a commit that referenced this pull request Nov 6, 2025
ghstack-source-id: 0df0ad4
Pull Request resolved: #167231
```
assert all(
    isinstance(o, (torch.Tensor, int, torch.SymInt, torch.Generator))
    for o in operands
    if o is not None
)
```
Contributor

when do you see None as input?

Contributor Author

@angelayi angelayi Nov 18, 2025


The effect tokens are passed in as None here since we will eventually just discard these inputs.
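
A minimal sketch of that convention (hypothetical, simplified): the token slots arrive as `None` placeholders and are filtered out before the subgraph actually runs, so the check above only needs to validate the non-`None` operands.

```
real_operands = tuple(o for o in operands if o is not None)  # drop the None token placeholders
```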

This was referenced Nov 14, 2025
Khanaksahu pushed a commit to Khanaksahu/pytorch that referenced this pull request Nov 17, 2025
ghstack-source-id: 4742f7e
Pull Request resolved: pytorch/pytorch#167231
@pytorchmergebot
Collaborator

Starting merge as part of PR stack under #167245

@angelayi
Contributor Author

@angelayi has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Nov 19, 2025

pytorchmergebot pushed a commit that referenced this pull request Nov 19, 2025
…67245)

In the [previous PR](https://github.com/pytorch/pytorch/pull/167231/files#diff-e2b74af5d8b538a7d07d18507d27010703742ddad5f819992b55f5abc6d9a502R964-R966) we found that the eager autograd implementation of invoke_subgraph calls the subgraph twice. If the subgraph contains effects, those effects will be run twice, which is bad. This PR fixes the issue by getting the output metadata from `subgraph`'s `node.meta` if it exists.

Differential Revision: [D87392740](https://our.internmc.facebook.com/intern/diff/D87392740)
Pull Request resolved: #167245
Approved by: https://github.com/anijain2305
ghstack dependencies: #167231
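
As a rough illustration of that approach (a sketch under assumptions, not the actual fix), the output metadata can be recovered from the FakeTensor values already recorded on the traced subgraph's output node instead of re-executing the subgraph; the helper name below is hypothetical.

```
import torch.fx

def cached_output_meta(subgraph_gm: torch.fx.GraphModule):
    # Hypothetical helper: read output metadata from the traced subgraph's
    # output node rather than running the (possibly effectful) subgraph again.
    output_node = next(n for n in subgraph_gm.graph.nodes if n.op == "output")
    outs = output_node.args[0]
    # node.meta["val"] conventionally holds the FakeTensor recorded at trace time
    return [o.meta.get("val") if isinstance(o, torch.fx.Node) else o for o in outs]
```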
@yangw-dev
Contributor

@pytorchbot revert -m "break internal tests, synced with author regarding this: example error: AttributeError: 'list' object has no attribute 'dtype'" -c ghfirst

@pytorchmergebot
Collaborator

@pytorchbot successfully started a revert job. Check the current status here.
Questions? Feedback? Please reach out to the PyTorch DevX Team

@pytorchmergebot
Collaborator

Reverting PR 167231 failed

Reason: Command `git -C /home/runner/work/pytorch/pytorch revert --no-edit f49833de54450b03b808a5b9ad774ce14ff2c8a2` returned non-zero exit code 1

```
Auto-merging test/higher_order_ops/test_with_effects.py
CONFLICT (content): Merge conflict in test/higher_order_ops/test_with_effects.py
Auto-merging torch/_higher_order_ops/invoke_subgraph.py
error: could not revert f49833de544... [hoo] Invoke subgraph + effect (#167231)
hint: After resolving the conflicts, mark them with
hint: "git add/rm <pathspec>", then run
hint: "git revert --continue".
hint: You can instead skip this commit with "git revert --skip".
hint: To abort and get back to the state before "git revert",
hint: run "git revert --abort".
hint: Disable this message with "git config set advice.mergeConflict false"
```

Details for Dev Infra team (raised by workflow job)

@yangw-dev
Contributor

@pytorchbot revert -m "the diff breaks tests internally " -c ghfirst

@pytorchmergebot
Collaborator

@pytorchbot successfully started a revert job. Check the current status here.
Questions? Feedback? Please reach out to the PyTorch DevX Team

pytorchmergebot added a commit that referenced this pull request Nov 20, 2025
This reverts commit f49833d.

Reverted #167231 on behalf of https://github.com/yangw-dev due to the diff breaks tests internally  ([comment](#167231 (comment)))
@pytorchmergebot
Collaborator

@angelayi your PR has been successfully reverted.

@pytorchmergebot pytorchmergebot added Reverted ci-no-td Do not run TD on this PR labels Nov 20, 2025